Generalization errors of the simple perceptron

Author

  • Jianfeng Feng
Abstract

Finding an exact form for the generalization error of a learning machine is an open problem, even in the simplest case: simple perceptron learning. We introduce a new approach to the problem. The generalization error of the simple perceptron is expressed as a linear combination of extreme values of the inputs. With the help of extreme value theory in statistics, we then obtain an exact form of the generalization error of the simple perceptron in the worst case of learning. Generalization errors of the higher-order perceptron, which take the form of an inverse power law in the number of examples, are also considered.
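The inverse-power-law decay referred to in the abstract is easy to probe numerically. The Python sketch below is illustrative only and is not the paper's extreme-value construction: it assumes Gaussian inputs labelled by a random teacher perceptron, trains a student to zero training error with the classical perceptron rule, and uses the fact that, for spherically symmetric inputs, the error probability on a fresh example equals the student-teacher angle divided by pi. If a 1/t law holds, the printed product t * eps should stay roughly constant as the number of examples t grows.

import numpy as np

rng = np.random.default_rng(0)

def train_perceptron(X, y, max_epochs=5000):
    # Classical perceptron rule, run until the training set is perfectly
    # classified; termination is guaranteed because the labels come from
    # a teacher vector, so the data are linearly separable.
    w = np.zeros(X.shape[1])
    for _ in range(max_epochs):
        mistakes = 0
        for xi, yi in zip(X, y):
            if yi * (xi @ w) <= 0:
                w += yi * xi
                mistakes += 1
        if mistakes == 0:
            break
    return w

def generalization_error(w_student, w_teacher):
    # For spherically symmetric inputs, the probability that student and
    # teacher disagree on a fresh example is (angle between them) / pi.
    c = w_student @ w_teacher / (np.linalg.norm(w_student) * np.linalg.norm(w_teacher))
    return np.arccos(np.clip(c, -1.0, 1.0)) / np.pi

d = 20                                # input dimension (chosen arbitrarily)
w_teacher = rng.standard_normal(d)    # hypothetical teacher

for t in (100, 200, 400, 800, 1600):
    X = rng.standard_normal((t, d))   # t random training examples
    y = np.sign(X @ w_teacher)
    w_student = train_perceptron(X, y)
    eps = generalization_error(w_student, w_teacher)
    print(f"t = {t:5d}   eps = {eps:.4f}   t * eps = {t * eps:.1f}")

Seeing t * eps hover around a dimension-dependent constant is consistent with the inverse-power-law picture; the exact worst-case constant derived in the paper via extreme value theory is a different and stronger statement than this typical-case experiment.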


Related articles

Estimating Exact Form of Generalisation Errors

Abstract. A novel approach to estimating the worst-case generalisation error of the simple perceptron is introduced. It is well known that the generalisation error of the simple perceptron is of the form d/t with an unknown constant d, which depends only on the dimension of the inputs, where t is the number of learned examples. Based upon extreme value theory in statistics we obtain an exact f...
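Rendered in LaTeX (the symbol for the generalisation error is assumed, since the snippet is truncated; the snippet itself calls the constant d), the claim reads:

% \epsilon_g(t): generalisation error after t examples (symbol assumed);
% the constant d depends only on the input dimension.
\epsilon_g(t) \sim \frac{d}{t}, \qquad t \to \infty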


Singularities in Learning Models: Gaussian Random Field Approach

Singularities are ubiquitous in the parameter space of hierarchical models such as multilayer perceptrons. At singularities, the Fisher information metric degenerates, implying that the Cramér-Rao paradigm no longer holds and that classical model-selection theory, such as AIC and MDL, cannot be applied. It is important to study the relation between the generalization error and the training error...


Learning Algorithms, Input Distributions and Generalization

We study the interaction between input distributions, learning algorithms and finite sample sizes in the case of learning classification tasks. Focusing on the case of normal input distributions, we use statistical mechanics techniques to calculate the empirical and expected (or generalization) errors for several well-known algorithms learning the weights of a single-layer perceptron. In the case ...


Empirical Risk Minimization Versus Maximum-Likelihood Estimation: a Case Study

We study the interaction between input distributions, learning algorithms and finite sample sizes in the case of learning classification tasks. Focusing on the case of normal input distributions, we use statistical mechanics techniques to calculate the empirical and expected (or generalization) errors for several well-known algorithms learning the weights of a single-layer perceptron. In the case ...


Multifractal analysis of perceptron learning with errors

Random input patterns induce a partition of the coupling space of a perceptron into cells labeled by their output sequences. Learning some data with a maximal error rate leads to clusters of neighboring cells. By analyzing the internal structure of these clusters with the formalism of multifractals, we can handle different storage and generalization tasks for lazy students and absent-minded te...



Journal:

Volume   Issue

Pages  -

Publication date: 1998